Building on the simulations in Chapter 1, this section estimates the spectrum of fairness. It complements Section 4.3 of the main paper, linked from the supplement’s introduction and in the left margin of the current interface. We recommend keeping the main paper nearby, as this supplement focuses exclusively on implementation.
Packages for this section
library(tidyverse)
library(latex2exp)
library(jsonlite)

## also set up python for optimal transport
library(reticulate)

get_python_env_path <- function(file_path = "python_env_path.txt") {
  if (!file.exists(file_path)) {
    stop("The file 'python_env_path.txt' is missing. Please create it and specify your Python environment path.")
  }
  # Read the file and extract the first non-comment, non-empty line
  env_path <- readLines(file_path, warn = FALSE)
  env_path <- env_path[!grepl("^#", env_path) & nzchar(env_path)]
  if (length(env_path) == 0) {
    stop("No valid Python environment path found in 'python_env_path.txt'. Please enter your path.")
  }
  return(trimws(env_path[1]))
}

tryCatch({
  python_env_path <- get_python_env_path()
  use_python(python_env_path, required = TRUE)
  message("Python environment successfully set to: ", python_env_path)
}, error = function(e) {
  stop(e$message)
})
Data for this section
sims <- fromJSON('simuls/train_scenarios.json')
valid <- fromJSON('simuls/valid_scenarios.json')
test <- fromJSON('simuls/test_scenarios.json')

## For the experiment in section 5
sim_samples <- jsonlite::fromJSON('simuls/sim_study.json')
In this section, we provide a detailed example of the methodology for estimating the benchmark premiums from the spectrum of fairness defined in Section 4.1 of the main paper, complementing the estimation strategy of Section 4.3.
In this paper, premiums are estimated with the lightgbm algorithm of Ke et al. (2017), a package that supports common actuarial distributions (Tweedie, Poisson, Gamma, etc.) and was found to have excellent predictive performance compared to other boosting algorithms in Chevalier and Côté (2024). Given lightgbm’s flexibility, we advocate for careful variable preselection to:
Ensure comprehensive coverage of risk factors,
Reduce redundancy and potential multicollinearity while preserving predictive value.
We optimize hyperparameters with a validation set to prevent overfitting. Key hyperparameters in lightgbm are the number of trees (num_iterations), the regularization parameters (lambda_l1, lambda_l2) for complexity control, and the subsampling percentages (feature_fraction, bagging_fraction).
The data used to construct prices and the estimated spectrum should align to ensure comparability. In the Case Study of the main paper, we conduct a posteriori benchmarking, as the study period is historical. An a posteriori assessment can reveal segments where the model was misaligned with fairness dimensions after the fact. In contrast, conducting benchmarking concurrently with the development of a new pricing model provides an estimate of how well the commercial price aligns with fairness dimensions for future periods, though the true fairness implications of the commercial price will only become evident retrospectively.
Discretizing \(D\)
Subsequent steps involve numerous computations over all possible values of \(D\). When \(D\) is continuous, discretization can improve tractability. The discretized version should be used throughout the estimation of the spectrum, and the discretization function should be kept for future applications. For multivariate \(D\), one approach is to first discretize continuous components, then treat the vector as one categorical variable where each unique combination defines a category. Care should be taken to ensure a manageable number of categories, with sufficient representation in each.
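As an illustration, the sketch below shows one possible implementation: a continuous component of \(D\) is binned at its empirical quantiles and combined with a categorical component into a single categorical variable. The variable names (d_cont, d_cat) and the quantile-based binning are hypothetical choices, not the exact procedure used in the paper.

discretize_D <- function(d_cont, d_cat, n_bins = 5, breaks = NULL) {
  ## Bin a continuous component of D at its empirical quantiles
  if (is.null(breaks)) {
    breaks <- quantile(d_cont, probs = seq(0, 1, length.out = n_bins + 1), na.rm = TRUE)
    breaks[1] <- -Inf
    breaks[length(breaks)] <- Inf  # so future out-of-range values still fall in a bin
  }
  d_binned <- cut(d_cont, breaks = breaks, include.lowest = TRUE)
  list(
    D = interaction(d_binned, d_cat, drop = TRUE),  # one category per unique combination
    breaks = breaks                                 # keep the discretization for future applications
  )
}

Returning the breaks alongside the discretized variable makes it easy to reapply the same discretization to new data.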
Balancing benchmark premiums
Profit and expense loadings are important, but they must not reintroduce unfairness. To align premium benchmarks with expected profits, we multiply the premiums by a factor \(\delta_j \geq 1~\forall j \in \{B, U, A, H, C\}\) to globally balance all premiums to the level of the commercial price.
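For illustration, a minimal sketch of this balancing step with hypothetical inputs (a named list of benchmark premium vectors, the commercial price, and exposures on the same policies):

balance_benchmarks <- function(benchmarks, commercial, exposure = rep(1, length(commercial))) {
  ## benchmarks: named list of premium vectors, one per j in {B, U, A, H, C}
  lapply(benchmarks, function(prem_j) {
    delta_j <- sum(commercial * exposure) / sum(prem_j * exposure)  # global balancing factor
    prem_j * delta_j
  })
}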
In what follows, we detail how we estimate each benchmark premium in our framework.
2.1 Best-estimate premium
The best-estimate premium \(\widehat{\mu}^B(\mathbf{x}, d)\) serves as the anchor; its estimation is therefore crucial for our framework. It provides the best predictor of the response variable \(Y\) when differentiating risks based on \((X, D)\). It can be derived from indicated rates or from data-driven estimates (what we do), as detailed in Complement 5 of the main paper.
Related section
The estimation strategy for the five families of fair premium is described in Section 4.3 of the main paper.
Training the data-driven best-estimate price
source('___lgb_best_estimate.R')

## clean the pred repo
unlink(file.path('preds', "*_best_estimate.json"))

# Define grid for hyperparameter optimization
hyperparameter_grid <- expand.grid(
  learning_rate = c(0.01),
  feature_fraction = c(0.75),
  bagging_fraction = c(0.75),
  max_depth = c(5),
  lambda_l1 = c(0.5),
  lambda_l2 = c(0.5)
)

best_lgb <- setNames(nm = names(sims)) %>%
  lapply(function(name){
    list_df <- list(
      'train' = sims[[name]],
      'valid' = valid[[name]],
      'test' = test[[name]]
    )
    the_best_estimate_lightgbm_fun(
      list_data = list_df,
      name = name,
      hyperparameter_grid = hyperparameter_grid
    )
  })
Best for scenario: Scenario1
hyperparam 1 / 1
Best valid mse: 53.28979
optimal ntree: 605
Training time: 15.16073 sec.
Best for scenario: Scenario2
hyperparam 1 / 1
Best valid mse: 53.43552
optimal ntree: 540
Training time: 15.29007 sec.
Best for scenario: Scenario3
hyperparam 1 / 1
Best valid mse: 52.30437
optimal ntree: 518
Training time: 13.96573 sec.
Training the best-estimate price on experimental data (for later use in section 5)
2.2 Unaware premium
The unaware price, \(\mu^U(\mathbf{x})\), excludes direct use of \(D\). It is defined as the best predictor of \(\widehat{\mu}^B(\mathbf{X}, D)\) given \(\mathbf{x}\): \[\begin{equation*}
\widehat{\mu}^U(\mathbf{x}) = E\{\widehat{\mu}^B(\mathbf{x}, D) | X = \mathbf{x}\}.
\end{equation*}\]
This maintains consistency with the best-estimate premium, whether \(\widehat{\mu}^B\) is data-driven or based on indicated rates (Complement 5 of the main paper). One can use a regression model, such as a lightgbm, to predict \(\widehat{\mu}^B(\mathbf{X}, D)\) based on \(\mathbf{X}\). The loss function defaults to mean squared error, which is a reasonable distributional assumption for this response variable.
Alternatively (what we do), one can estimate the propensity \(\Pr(D = d \mid X = \mathbf{x})\) and explicitly weight the best-estimate premiums by the predicted propensity. This also keeps consistency with the best-estimate premium; a minimal sketch of the aggregation is shown below.
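For a binary \(D\), the propensity-weighted aggregation can be sketched as follows; mu_B() and p_dx() are hypothetical stand-ins for the prediction functions of the best-estimate and propensity models.

unaware_premium <- function(x1, x2, mu_B, p_dx) {
  ## p_dx(x1, x2): estimated Pr(D = 1 | X = x); mu_B(x1, x2, d): best-estimate premium
  p1 <- p_dx(x1, x2)
  mu_B(x1, x2, d = 1) * p1 + mu_B(x1, x2, d = 0) * (1 - p1)
}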
Pdx for scenario: Scenario1
hyperparam 1 / 1
Best valid binary logloss: 0.5808773
optimal ntree: 391
Training time: 15.42748 sec.
Pdx for scenario: Scenario2
hyperparam 1 / 1
Best valid binary logloss: 0.5808827
optimal ntree: 416
Training time: 15.39385 sec.
Pdx for scenario: Scenario3
hyperparam 1 / 1
Best valid binary logloss: 0.6352072
optimal ntree: 433
Training time: 19.27892 sec.
Training the propensity function on experimental data (for later use in section 5)
2.3 Aware premium
The aware premium, \(\widehat{\mu}^A(\mathbf{x})\), requires knowledge of the marginal distribution of \(D\). The aware price, a particular case of the “discrimination-free” price of Lindholm et al. (2022), is computed as: \[\begin{equation*}
\widehat{\mu}^A(\mathbf{x}) = \sum_{d\in \mathcal{D}} \widehat{\mu}^B(\mathbf{x}, d) \widehat{\Pr}(D = d).
\end{equation*}\] As discussed earlier, we assume \(D\) is discrete or discretized. If the training sample is representative of the target population (portfolio, market, or region), empirical proportions are consistent estimators of \(\Pr(D = d)\). This estimator is also suggested in Section 5 of Lindholm et al. (2022).
Computing the empirical proportions for protected subpopulations
marg_dist <- sims %>%
  lapply(function(data){
    data$D %>% table %>% prop.table
  })

## for later use
saveRDS(marg_dist, 'preds/the_empirical_proportions.rds')

## Computing the empirical proportions for protected subpopulations on experimental data (for later use in section 5)
marg_dist_sims <- seq_along(sim_samples) %>%
  lapply(function(idx){
    sim_samples[[idx]]$train$D %>% table %>% prop.table
  })
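For illustration, a minimal sketch of the aware aggregation using the empirical proportions computed above; mu_B() is a hypothetical stand-in for the best-estimate prediction function.

aware_premium <- function(x1, x2, mu_B, marg) {
  ## marg: named vector of empirical proportions, e.g. marg_dist[['Scenario1']]
  Reduce(`+`, lapply(names(marg), function(d) {
    mu_B(x1, x2, d = d) * as.numeric(marg[d])
  }))
}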
Note
This method generalizes the use of \(D\) as a control variable to prevent distortions in the estimated effect of \(\mathbf{X}\) due to omitted variable bias. It applies to all model types and has a causal interpretation under the assumptions that: (i) possible values of \(\mathbf{X}\) are observed across all groups of \(D\) (no extrapolation), (ii) \(D\) causes both \(\mathbf{X}\) and \(Y\), and (iii) there are no relevant unobserved variables. Under the same assumptions, other methods also yield estimators of the aware premium. One example is inverse probability weighting, which re-weights observations to (artificially) remove the association between \(\mathbf{X}\) and \(D\), preventing proxy effects; a sketch follows.
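As a sketch of the inverse probability weighting alternative (hypothetical inputs), the weights \(\Pr(D = d_i)/\widehat{\Pr}(D = d_i \mid X = \mathbf{x}_i)\) could be computed as follows and supplied to the model fit:

ipw_weights <- function(d, p_dx_hat, marg) {
  ## d: observed protected attribute; p_dx_hat: estimated probability of the observed
  ## level d_i given x_i; marg: named vector of marginal proportions Pr(D = d)
  as.numeric(marg[as.character(d)]) / p_dx_hat
}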
2.4 Corrective premium
We estimate the corrective premium, \(\widehat{\mu}^C(\mathbf{x}, d)\), designed to enforce strong demographic parity by aligning premium distributions across protected groups. Various methods can achieve this, but we recommend the optimal transport approach of Hu, Ratz, and Charpentier (2024). Its advantage lies in modifying \(\widehat{\mu}^B(\mathbf{x}, d)\) while keeping the overall spectrum estimation anchored to the initial best-estimate premium. As an incremental adjustment, it minimizes modeling complexity while enforcing demographic parity, yielding a simple estimator for the corrective premium.
The corrective premium is computed by training a transport function that shifts best-estimate premiums for each protected group toward a common barycenter, ensuring demographic parity through corrective direct discrimination. See Hu, Ratz, and Charpentier (2024) for details. This optimal transport method is implemented in Python via the Equipy package of Fernandes Machado et al. (2025). The following chunk illustrates how this method, despite its complexity, translates into concise Python code.
Listing 2.1: Illustrative Python code for Wasserstein transport toward corrective premiums.
import numpy as np
import pandas as pd
from equipy.fairness import FairWasserstein

# Load best-estimate premiums and associated sensitive attributes
best_est_train = pd.read_csv('best_est_prem_train.csv')  # Training premiums
sens_train = pd.read_csv('sensitive_train.csv')          # discrete sensitive data

# Train Fair Wasserstein transport to adjust premiums
barycenter = FairWasserstein()
barycenter.fit(best_est_train.values, sens_train.values)

# Load best-estimate premiums and associated sensitive attributes for the test set
best_est_test = pd.read_csv('best_est_prem_test.csv')  # Test premiums
sens_test = pd.read_csv('sensitive_test.csv')          # discrete sensitive data

# Apply transformation to obtain fairness-adjusted premiums
corrective_premiums_test = barycenter.transform(best_est_test.values, sens_test.values)

# Save results
pd.DataFrame(corrective_premiums_test).to_csv('corr_prem_test.csv', index=False)
Optimal transport and Wasserstein distance: technical details
Let \(Z\) be a quantity of interest. Let \(\nu_0\) and \(\nu_1\) be two laws of \(Z\) (finite \(p\)-th moments, \(p\ge1\)).
Probabilistic definition (pairing).
Over all joint laws \(\pi\) of \((Z_0, Z_1)\) with marginals \(\nu_0\) and \(\nu_1\),
\[
W_p(\nu_0,\nu_1)=\Big(\inf_{\pi\in\Pi(\nu_0,\nu_1)} E_{\pi}\{|Z_0-Z_1|^p\}\Big)^{1/p}.
\]
Interpretation: the smallest average \(p\)-power gap under any admissible pairing. Units: same as \(Z\); \(W_p=0\) iff \(\nu_0=\nu_1\).
Univariate formula (what we compute).
Let \(F\) and \(G\) be the CDFs of \(\nu_0\) and \(\nu_1\), with quantiles \(F^{-1}\) and \(G^{-1}\). Then
\[
W_p(\nu_0,\nu_1)=\left(\int_{0}^{1}\big|F^{-1}(u)-G^{-1}(u)\big|^p\,du\right)^{1/p}.
\]
For \(p=1\),
\[
W_1(\nu_0,\nu_1)=\int_{0}^{1}\big|F^{-1}(u)-G^{-1}(u)\big|\,du.
\]
Interpretation: \(W_1\) is the average absolute gap between matched quantiles (interpretable in dollars if \(Z\) is a monetary amount).
Empirical, univariate (equal sample sizes).
Given samples \(z^{(0)}_{1:n}\) from \(\nu_0\) and \(z^{(1)}_{1:n}\) from \(\nu_1\), sort: \[
z^{(0)}_{(1)}\le\cdots\le z^{(0)}_{(n)},\qquad
z^{(1)}_{(1)}\le\cdots\le z^{(1)}_{(n)}.
\] Then \[
\widehat W_p^p=\frac{1}{n}\sum_{i=1}^{n}\big|z^{(1)}_{(i)}-z^{(0)}_{(i)}\big|^p.
\]
Unequal exposures (weighted quantiles).
Pick a grid size \(m\in\mathbb N\) and midpoints \(u_k=(k-\tfrac12)/m\) for \(k=1,\dots,m\) (so \(\Delta u=1/m\)).
If \(Q_1\) and \(Q_0\) are the (exposure-weighted) quantile functions of the two groups, then \[
\widehat W_p^{\,p}\ \approx\ \frac{1}{m}\sum_{k=1}^{m}\big|Q_1(u_k)-Q_0(u_k)\big|^p.
\]
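As an illustration, the approximation above translates into a few lines of R (a sketch; for unequal exposures, replace quantile() with an exposure-weighted quantile function):

wasserstein_p <- function(z0, z1, p = 1, m = 1000) {
  u <- (seq_len(m) - 0.5) / m                   # midpoints u_k = (k - 1/2)/m
  q0 <- quantile(z0, probs = u, names = FALSE)  # Q_0(u_k)
  q1 <- quantile(z1, probs = u, names = FALSE)  # Q_1(u_k)
  mean(abs(q1 - q0)^p)^(1 / p)
}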
Relevance for actuaries: it compares entire distributions (center and tails), reports in dollar units, and rests on solid theoretical foundations.
Computing the optimal transport mapping for the scenarios
source_python("___opt_transp.py")
## now, the folder 'transported' exists, with one epsilon_y file per scenario
To predict corrective premiums on other datasets, one can alternatively train a lightgbm model that maps \((X, D)\) to the corrective premiums produced by Equipy on the training sample; a sketch is given below.
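A minimal sketch of this alternative, assuming features X1, X2, a numerically encoded D, and a vector corr_train of corrective premiums returned by Equipy (the actual training code lives in the sourced scripts):

library(lightgbm)

fit_corr_mapping <- function(train_df, corr_train) {
  ## Regression lightgbm mapping (X, D) to the Equipy corrective premiums
  dtrain <- lgb.Dataset(
    data = as.matrix(train_df[, c("X1", "X2", "D")]),  # assumes D is numerically encoded
    label = corr_train
  )
  lgb.train(
    params = list(objective = "regression", learning_rate = 0.01, max_depth = 5),
    data = dtrain,
    nrounds = 500
  )
}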
Training the lightgbm mapping from \((X, D)\) to corrective premiums for the scenarios
Mapping to corr. for scenario: Scenario1
Data import: 0.4800718 sec.
Data conversion: 0.01207399 sec.
Lgb training : 19.99445 sec.
Mapping to corr. for scenario: Scenario2
Data import: 0.389869 sec.
Data conversion: 0.01291299 sec.
Lgb training : 37.44629 sec.
Mapping to corr. for scenario: Scenario3
Data import: 0.450701 sec.
Data conversion: 0.133497 sec.
Lgb training : 26.2301 sec.
2.5 Hyperaware premium
The hyperaware premium, \(\widehat{\mu}^H(\mathbf{x})\), is the best non-directly discriminatory approximation of the corrective premium. We construct a lightgbm using only \(\mathbf{X}\) to predict the corrective premium, aiming at demographic parity without direct reliance on \(D\): \[\begin{equation*}
\widehat{\mu}^H(\mathbf{x}) = E\{\widehat{\mu}^C(\mathbf{x}, D) | X = \mathbf{x}\}.
\end{equation*}\]
Since the response variable is a premium, MSE is a suitable choice of loss function in most cases. As for the unaware premium, one can explicitly aggregate corrective premiums across protected groups using estimated propensities as weights; this is what we do here, as sketched below.
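Mirroring the unaware case, a minimal sketch for a binary \(D\); mu_C() and p_dx() are hypothetical stand-ins for the corrective-premium mapping and the propensity model.

hyperaware_premium <- function(x1, x2, mu_C, p_dx) {
  ## p_dx(x1, x2): estimated Pr(D = 1 | X = x); mu_C(x1, x2, d): corrective premium
  p1 <- p_dx(x1, x2)
  mu_C(x1, x2, d = 1) * p1 + mu_C(x1, x2, d = 0) * (1 - p1)
}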
2.6 Visualizing the estimated spectrum
By combining all the trained models as components of the estimated spectrum of fairness, we can define the premium functions that will serve to predict the premiums.
Defining the estimated premium and metrics functions
premiums <- setNames(nm = names(sims)) %>%
  lapply(function(name){
    premium_generator(
      best = best_lgb[[name]]$pred_fun,
      pdx = pdx_lgb[[name]]$pred_fun,
      maps_to_corr = corr_mapping_lgb[[name]],
      marg = marg_dist[[name]]
    )
  })

quants <- setNames(nm = names(sims)) %>%
  lapply(function(name){
    quant_generator(premiums = premiums[[name]])
  })

## For experimental data (future use in section 5)
premiums_sims <- setNames(obj = seq_along(sim_samples), nm = names(sim_samples)) %>%
  lapply(function(idx){
    premium_generator(
      best = best_sims[[idx]]$pred_fun,
      pdx = pdx_sims[[idx]]$pred_fun,
      maps_to_corr = corr_mapping_lgb[['Scenario1']],  ## We put anything, won't be used anyway as we focus on proxy vulnerability for section 5.
      marg = marg_dist_sims[[idx]]
    )
  })

quants_sims <- setNames(obj = seq_along(sim_samples), nm = names(sim_samples)) %>%
  lapply(function(idx){
    quant_generator(premiums = premiums_sims[[idx]])
  })
Computation of estimated premiums and local metrics across the grid
df_to_g_file <- "preds/df_to_g.json"

# Check if the JSON file exists
if (file.exists(df_to_g_file)) {
  message(sprintf("[%s] File exists. Reading df_to_g from %s", Sys.time(), df_to_g_file))
  df_to_g <- fromJSON(df_to_g_file)
} else {
  ## From the first section of the online supplement
  preds_grid_stats_theo <- fromJSON('preds/preds_grid_stats_theo.json')
  df_to_g <- setNames(nm = names(sims)) %>%
    lapply(function(name) {
      message(sprintf("[%s] Processing: %s", Sys.time(), name))
      local_scenario_df <- preds_grid_stats_theo[[name]]
      # Start time for this scenario
      start_time <- Sys.time()
      # Step 1: Compute premiums
      message(sprintf("[%s] Step 1: Computing premiums", Sys.time()))
      premium_results <- setNames(nm = levels_for_premiums) %>%
        sapply(function(s) {
          message(sprintf("[%s] Computing premium: %s", Sys.time(), s))
          premiums[[name]][[s]](
            x1 = local_scenario_df$x1,
            x2 = local_scenario_df$x2,
            d = local_scenario_df$d
          )
        })
      # Step 2: Compute PDX
      message(sprintf("[%s] Step 2: Computing PDX", Sys.time()))
      pdx_results <- pdx_lgb[[name]]$pred_fun(
        data.frame(
          X1 = local_scenario_df$x1,
          X2 = local_scenario_df$x2,
          D = local_scenario_df$d
        )
      )
      # Step 3: Combine results
      message(sprintf("[%s] Step 3: Combining results", Sys.time()))
      result <- data.frame(
        local_scenario_df,
        premium_results,
        pdx = pdx_results
      )
      # Log completion time
      end_time <- Sys.time()
      message(sprintf("[%s] Finished processing: %s (Duration: %.2f seconds)",
                      end_time, name,
                      as.numeric(difftime(end_time, start_time, units = "secs"))))
      return(result)
    })
  # Save the entire df_to_g object to JSON
  message(sprintf("[%s] Saving df_to_g to %s", Sys.time(), df_to_g_file))
  toJSON(df_to_g, pretty = TRUE, auto_unbox = TRUE) %>% write(df_to_g_file)
  rm(preds_grid_stats_theo)
}

grid_stats_path <- 'preds/preds_grid_stats.json'

# Check and load or compute preds_grid_stats
if (file.exists(grid_stats_path)) {
  preds_grid_stats <- fromJSON(grid_stats_path)
} else {
  preds_grid_stats <- setNames(nm = names(df_to_g)) %>%
    lapply(function(name) {
      data.frame(
        df_to_g[[name]],
        setNames(nm = levels_for_quants) %>%
          sapply(function(s) {
            quants[[name]][[s]](
              x1 = df_to_g[[name]]$x1,
              x2 = df_to_g[[name]]$x2,
              d = df_to_g[[name]]$d
            )
          })
      )
    })
  toJSON(preds_grid_stats, pretty = TRUE, auto_unbox = TRUE) %>% write(grid_stats_path)
}
Computing estimated premiums and local metrics for the simulations
pop_to_g_file <- "preds/pop_to_g.json"

# Check if the JSON file exists
if (file.exists(pop_to_g_file)) {
  message(sprintf("[%s] File exists. Reading pop_to_g from %s", Sys.time(), pop_to_g_file))
  pop_to_g <- fromJSON(pop_to_g_file)
} else {
  ## From the first section of the online supplement
  preds_pop_stats_theo <- fromJSON('preds/preds_pop_stats_theo.json')
  pop_to_g <- setNames(nm = names(sims)) %>%
    lapply(function(name) {
      message(sprintf("[%s] Processing: %s", Sys.time(), name))
      # Start time for this simulation
      start_time <- Sys.time()
      list_data <- list(
        'train' = sims[[name]],
        'valid' = valid[[name]],
        'test' = test[[name]]
      )
      result <- setNames(nm = names(list_data)) %>%
        lapply(function(nm){
          data <- list_data[[nm]]
          # Step 1: Compute premiums
          message(sprintf("[%s] Step 1: Computing premiums", Sys.time()))
          premium_results <- setNames(nm = levels_for_premiums) %>%
            sapply(function(s) {
              message(sprintf("[%s] Computing premium: %s", Sys.time(), s))
              premiums[[name]][[s]](
                x1 = data$X1,
                x2 = data$X2,
                d = data$D
              )
            })
          # Step 2: Compute PDX
          message(sprintf("[%s] Step 2: Computing PDX", Sys.time()))
          pdx_results <- pdx_lgb[[name]]$pred_fun(
            data.frame(
              X1 = data$X1,
              X2 = data$X2,
              D = data$D
            )
          )
          # Step 3: Combine results
          message(sprintf("[%s] Step 3: Combining results", Sys.time()))
          data.frame(
            preds_pop_stats_theo[[name]][[nm]],
            premium_results,
            pdx = pdx_results
          )
        })
      # Log completion time
      end_time <- Sys.time()
      message(sprintf("[%s] Finished processing: %s (Duration: %.2f seconds)",
                      end_time, name,
                      as.numeric(difftime(end_time, start_time, units = "secs"))))
      return(result)
    })
  # Save the entire pop_to_g object to JSON
  message(sprintf("[%s] Saving pop_to_g to %s", Sys.time(), pop_to_g_file))
  toJSON(pop_to_g, pretty = TRUE, auto_unbox = TRUE) %>% write(pop_to_g_file)
  rm(preds_pop_stats_theo)
}

pop_stats_path <- 'preds/preds_pop_stats.json'

# Check and load or compute preds_pop_stats
if (file.exists(pop_stats_path)) {
  preds_pop_stats <- fromJSON(pop_stats_path)
} else {
  preds_pop_stats <- setNames(nm = names(sims)) %>%
    lapply(function(name) {
      setNames(nm = names(pop_to_g[[name]])) %>%
        lapply(function(set) {
          local_df <- pop_to_g[[name]][[set]]
          data.frame(
            local_df,
            setNames(nm = levels_for_quants) %>%
              sapply(function(s) {
                quants[[name]][[s]](
                  x1 = local_df$X1,
                  x2 = local_df$X2,
                  d = local_df$D
                )
              })
          )
        })
    })
  toJSON(preds_pop_stats, pretty = TRUE, auto_unbox = TRUE) %>% write(pop_stats_path)
}
Computing estimated premiums and local metrics for the experiment
Figure 2.2: Estimated best-estimate \(\widehat{\mu}^B\), unaware \(\widehat{\mu}^U\), and aware \(\widehat{\mu}^A\) premiums for scenarios 1, 2, and 3 in the Example as a function of \(x_1\), \(x_2\), and \(d\).
R code producing the illustration of the estimated aware, hyperaware, and corrective premiums.
Figure 2.3: Estimated aware \(\widehat{\mu}^A\), hyperaware \(\widehat{\mu}^H\), and corrective \(\widehat{\mu}^C\) premiums for scenarios 1, 2, and 3 in the Example as a function of \(x_1\), \(x_2\), and \(d\).
Chevalier, Dominik, and Marie-Pier Côté. 2024. “From Point to Probabilistic Gradient Boosting for Claim Frequency and Severity Prediction.” arXiv preprint arXiv:2412.14916. https://arxiv.org/abs/2412.14916.
Fernandes Machado, Agathe, Suzie Grondin, Philipp Ratz, Arthur Charpentier, and François Hu. 2025. “EquiPy: Sequential Fairness Using Optimal Transport in Python.” arXiv preprint arXiv:2503.09866. https://arxiv.org/abs/2503.09866.
Hu, François, Philipp Ratz, and Arthur Charpentier. 2024. “A Sequentially Fair Mechanism for Multiple Sensitive Attributes.” Proceedings of the AAAI Conference on Artificial Intelligence 38 (11): 12502–10. https://doi.org/10.1609/aaai.v38i11.29143.
Ke, Guolin, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. “LightGBM: A Highly Efficient Gradient Boosting Decision Tree.” Advances in Neural Information Processing Systems 30: 3146–54.
Lindholm, Mathias, Ronald Richman, Andreas Tsanakas, and Mario V. Wüthrich. 2022. “Discrimination-Free Insurance Pricing.” ASTIN Bulletin 52 (1): 55–89.